12 research outputs found

    Discovering cultural differences (and similarities) in facial expressions of emotion

    Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stayed by dichotomous debates (e.g., nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g., the ‘fear’ face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics.

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Social robots are now part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet, even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory with well-known limitations such as lacking naturalistic dynamics. With no agreed methodology to objectively engineer a broader variance of more psychologically impactful facial expressions into the social robots' repertoire, human-robot interactions remain restricted. Here, we address this generic challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of applying our method to social robotics and highlight the benefits of using a data-driven approach that places human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.

    Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods

    Social robots must be able to generate realistic and recognizable facial expressions to engage their human users. Many social robots are equipped with standardized facial expressions of emotion that are widely considered to be universally recognized across all cultures. However, mounting evidence shows that these facial expressions are not universally recognized - for example, they elicit significantly lower recognition accuracy in East Asian cultures than they do in Western cultures. Therefore, without culturally sensitive facial expressions, state-of-the-art social robots are restricted in their ability to engage a culturally diverse range of human users, which in turn limits their global marketability. To develop culturally sensitive facial expressions, novel data-driven methods are used to model the dynamic face movement patterns that convey basic emotions (e.g., happy, sad, anger) in a given culture using cultural perception. Here, we tested whether such dynamic facial expression models, derived in an East Asian culture and transferred to a popular social robot, improved the social signalling generation capabilities of the social robot with East Asian participants. Results showed that, compared to the social robot's existing set of ‘universal’ facial expressions, the culturally-sensitive facial expression models are recognized with generally higher accuracy and judged as more human-like by East Asian participants. We also detail the specific dynamic face movements (Action Units) that are associated with high recognition accuracy and judgments of human-likeness, including those that further boost performance. Our results therefore demonstrate the utility of using data-driven methods that employ human cultural perception to derive culturally-sensitive facial expressions that improve the social face signal generation capabilities of social robots. We anticipate that these methods will continue to inform the design of social robots and broaden their usability and global marketability.

    Distinct facial expressions represent pain and pleasure across cultures

    Real-world studies show that the facial expressions produced during pain and orgasm—two different and intense affective experiences—are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which challenges claims of their nondiagnosticity and suggests instead that they could serve communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.

    Cultural facial expressions dynamically convey emotion category and intensity information

    Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender’s behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions—“happy,” “surprise,” “fear,” “disgust,” “anger,” and “sad”—and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as “anger,” “disgust,” and “fear,” but differ on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.

    Facial expressions elicit multiplexed perceptions of emotion categories and dimensions

    Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1, 2, 3, 4, 5 including specific categories, such as “anger,” and broader dimensions, such as “negative valence, high arousal.”6, 7, 8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information—i.e., specific categories and broader dimensions—via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver’s perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10, 11, 12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent—i.e., multiplex—categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results—based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms—show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.

    The dual role of culture on signalling and receiving dynamic facial expressions

    Human survival critically relies on communicating a broad set of social messages including physical states and mental states. The prerequisite for any successful social communication is the shared knowledge between the sender and the receiver about what and how a specific social signal is used. To communicate the broad set of social messages in daily life, human beings have developed complex facial movement patterns as one of the most important and powerful social signals. With increasing globalization, cross-cultural interactions are fast becoming integral to modern life, which places increasing pressure on cross-cultural communication. Specifically, a broad set of facial expressions including conversational facial expressions is critical for clear communication because they guide the flow of social exchanges. Yet, our knowledge of such facial movement patterns is relatively limited in terms of their functions in different cultural contexts – for example, whether these important everyday facial expressions are understood across cultures or cause cross-cultural confusions. In this thesis, I explored how facial movement patterns are used in Western and East Asian cultures to communicate a broad set of social messages including physical states and mental states. Specifically, I objectively characterized the structure of dynamic facial expression patterns using 4D computer graphics and a data-driven social psychophysics method. I then examined the role of culture in signalling and receiving facial expressions using signal detection theory and Mutual Information analysis. Together, my results reveal for the first time how specific facial movement patterns are used to communicate a broad set of social messages in Western and East Asian cultures and how culture shapes the signalling and perception of such facial expressions in cross-cultural communication. Finally, I discussed the implications of my results for the fields of psychology, computer science, and social robotics, with links to my future work on developing a mathematical model of face social signalling and transferring this knowledge to socially and culturally sensitive conversational agents.
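The Mutual Information analysis mentioned in the thesis abstract can be illustrated with a minimal sketch: given a confusion matrix counting sender-intended versus receiver-perceived message categories, the mutual information (in bits) quantifies how much a receiver's response reveals about the intended signal. The function and the toy matrices below are illustrative assumptions, not data or code from the thesis.

```python
import numpy as np

def mutual_information(confusion):
    """Mutual information (bits) between intended and perceived categories,
    computed from a joint count matrix (rows: sent, columns: perceived)."""
    p = confusion / confusion.sum()           # joint distribution
    px = p.sum(axis=1, keepdims=True)         # marginal over sent categories
    py = p.sum(axis=0, keepdims=True)         # marginal over perceived categories
    nz = p > 0                                # skip zero cells to avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Toy 2x2 examples: a perfectly transmitted binary signal carries 1 bit,
# while responses unrelated to the signal carry 0 bits.
perfect = np.array([[50, 0], [0, 50]])
random_guess = np.array([[25, 25], [25, 25]])
print(mutual_information(perfect))       # 1.0
print(mutual_information(random_guess))  # 0.0
```

Higher values indicate facial signals that are reliably decoded; in a cross-cultural design, comparing this quantity within and across cultures indexes how well signals transmit between sender and receiver groups.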

    Stress intensity factors and weight functions for cracks in front of notches

    The knowledge of stress intensity factors for cracks at notch roots is important for the fracture-mechanical treatment of real components. Stress intensity factor solutions are available only for special notch geometries and externally applied loads. For the treatment of more complex loadings, such as thermal stresses near the notch root, the weight function is needed in addition. In the first part of this report, weight functions for cracks in front of internal notches are derived from stress intensity factor solutions under external loading available in the literature. The second part deals with cracks in front of edge notches. Limit cases of stress intensity factors are derived, and an interpolation method is presented, which together allow stress intensity factors to be estimated for cracks in front of internal elliptical notches with arbitrary aspect ratio of the ellipse and for external notches.
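The weight-function approach described above rests on a standard relation of linear-elastic fracture mechanics (due to Bueckner and Rice): for a crack of length a, the mode-I stress intensity factor under an arbitrary crack-face stress distribution σ(x) follows by integrating that stress against a geometry-dependent weight function h(x, a):

```latex
K_{\mathrm{I}} = \int_{0}^{a} h(x, a)\, \sigma(x)\, \mathrm{d}x
```

Because h(x, a) depends only on the cracked geometry and not on the loading, a weight function derived once from reference stress intensity factor solutions under external loading can be reused for complex load cases such as thermal stresses near the notch root.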

    Dynamic face movement texture enhances the perceived realism of facial expressions of emotion

    Most socially interactive virtual agents that generate facial expressions lack critical visual features such as expressive wrinkles, which could reduce their realistic appearance. Here, we examined the impact of dynamic facial texture on perceptions of realism of facial expressions of emotion and identified the emotion-specific features that enhance this perception. In a human perceptual judgment task, participants (20 white Westerners, 10 female) viewed pairs of facial expressions of the six classic emotions - happy, surprise, fear, disgust, anger and sad - with and without dynamic textures and selected the most realistic one from each pair. Analysis of participant choices showed that facial expressions with dynamic texture are perceived as more realistic significantly more often for all emotions except sad. Further analysis of the facial expression signals showed that emotion-specific features, such as darker forehead furrows in surprise, unilateral nose wrinkling in disgust, and shade variations around the cheeks in happy, enhanced perceptions of realism. Together, our results highlight the importance of equipping virtual agents with dynamic face movement texture to produce realistic facial expressions of emotion.

    Facial expressions of emotion include iconic signals of rejection and acceptance

    What are the evolutionary origins of facial expressions? One theory posits that they evolved from facial movements that control sensory stimulation (e.g., closing eyes to reduce visual input). Such signals would afford a salient iconicity that could facilitate communication. Here, we examined whether facial expressions of emotion include expansion and contraction facial movements that serve as icons of rejection and acceptance. Using the data-driven method of reverse correlation, we first modelled dynamic facial expressions of the six classic emotions – happy, surprise, fear, disgust, anger and sad – in each of 60 participants (Western, 31 females). On each of 2400 experimental trials, participants categorized a facial animation comprising a randomly activated subset of individual facial movements (Action Units; AUs) according to the six classic emotions or ‘other.’ We then modelled the dynamic AUs associated with each participant’s emotion response using non-parametric permutation inference (p < 0.05), resulting in 360 dynamic facial expression models (60 participants × 6 emotions). Next, we identified, in each facial expression model, the iconic facial movements and found that expansion movements – e.g., brow raising (AU1-2), eye opening (AU5), nostril dilating (AU38) and mouth gaping (AU26) – are primarily associated with acceptance messages (e.g., happy, surprise). Contraction movements – e.g., brow lowering (AU4), wincing (AU7), nose wrinkling (AU9), lip pinching (AU23) – are primarily associated with rejection messages (e.g., fear, disgust, anger, sad). Finally, we replicated these results with a separate set of facial expressions of conversational messages – thinking, interested, bored and confused (20 Westerners, 10 females). Together, our results show that facial expressions comprise latent iconic facial signals that represent rejection or acceptance in line with their social function. Future research will address how this iconicity could be exapted to ground more complex and abstract meanings in multimodal face-to-face communication.
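The reverse-correlation logic described in this abstract can be sketched in miniature: present random subsets of Action Units, record categorical responses, and test per AU whether it appears on an emotion's trials more often than expected under a label-shuffling null. This is a simplified stand-in, not the authors' pipeline; the trial counts, AU indexing, and the synthetic "observer" below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_aus = 2400, 42
# Each trial: a random binary AU activation pattern (True = AU shown).
stimuli = rng.random((n_trials, n_aus)) < 0.3

# Synthetic observer: responds "happy" whenever AU12 (index 11) is shown,
# otherwise "other" -- a stand-in for real human categorization data.
responses = np.where(stimuli[:, 11], "happy", "other")

def au_association(stimuli, responses, emotion, n_perm=2000, alpha=0.05):
    """Per-AU presence rate on `emotion` trials, tested against a null
    distribution obtained by permuting the response labels."""
    mask = responses == emotion
    observed = stimuli[mask].mean(axis=0)        # AU rate given this response
    null = np.empty((n_perm, stimuli.shape[1]))
    for i in range(n_perm):
        perm = rng.permutation(mask)             # shuffle response labels
        null[i] = stimuli[perm].mean(axis=0)
    # One-sided p-value: how often the null matches/exceeds the observed rate.
    p = (null >= observed).mean(axis=0)
    return observed, p < alpha

rates, significant = au_association(stimuli, responses, "happy")
# Indices of AUs flagged as significant: the diagnostic AU at index 11,
# plus any false alarms expected at the chosen alpha level.
print(np.flatnonzero(significant))
```

The significant AUs per participant and emotion would then form the dynamic facial expression models; the published work additionally models the AUs' temporal dynamics, which this sketch omits.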